Neuroscience of Consciousness
Oxford University Press (OUP)
All preprints, ranked by how well they match Neuroscience of Consciousness's content profile, based on 12 papers previously published here. The average preprint has a 0.01% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.
Schwarzkopf, D. S.; Yu, X. A.; Altan, E.; Bouyer, L.; Saurels, B. W.; Pellicano, E.; Arnold, D. H.
Research on mental visual imagery typically relies on vividness ratings. However, vividness is ill-defined as it lacks an objective reference. Here, we present survey results that suggest vividness is nevertheless a robust measure. It explains individual differences in a broad range of subjective experiences, from the detail of mental imagery and the propensity to report other internally generated visual experiences to the vividness of visual dreams. Critically, simple vividness ratings can replace the protracted questionnaires commonly used for this purpose and reduce methodological issues with these instruments. We further find that vividness is closely linked with the experience of "seeing" mental images or projecting them into the external world. People who report seeing mental images with their eyes shut are also more likely to experience externally projected imagery. Nevertheless, many people report having mental depictions without seeing them. Overall, our results indicate we should redefine visual aphantasia to distinguish individuals with faint or unseen visual images from those completely lacking a pictorial representation.
Robinson, K.; Zeleznikow-Johnston, A.; Wu, J.; Yoshimura, Y.; Tsuchiya, N.
What is a possible physical substrate of the qualitative aspects of consciousness (qualia)? Answering this question is a central goal of consciousness research. Because qualia are subjective and ineffable, a quantitative way to characterise them from verbal description has thus far proven elusive. To overcome the challenge of expressing subjective experience, structural and relational approaches grounded in mathematics have recently been proposed. Yet, as far as we know, no attempts have been made to evaluate the relationship between a given structure of qualia and the structure of its possible underlying physical substrate. Towards this ambitious goal of linking qualia and the physical, we set out to make an empirical first step by focusing on qualia of visual motion in human participants and their possible neural substrate recorded in mouse primary visual cortex. From human participants (N=171), we obtained dissimilarity ratings of visual motion experiences induced by 48 stimuli, spanning 8 directions and 6 spatial frequencies. Analysis revealed similarity structures of visual motion qualia that were dissociated from similarity structures expected purely from physical parameters or their combinations. From nine individual mice, we recorded single-neuron activity (n=751) with optical imaging in both awake and lightly anaesthetised conditions (isoflurane 0.6-0.8%, which retains neural responses and renders mice unresponsive to sensory stimulation). From neuron population responses to a similar set of motion stimuli, we computed a distance matrix that is comparable to our human dissimilarity matrix. Quantitative analyses show structural commonalities between human qualia structure and mouse neural structure, where a categorical organisation of stimulus direction best explains both the human qualia structure and the mouse neural activity.
Interestingly, these commonalities held true for both awake and lightly anaesthetised conditions, leaving open the possibility that mice may have been unresponsive but conscious of visual motion under light anaesthesia. Finally, we list several empirical factors that can be improved to advance our qualia-structure approach in the future.
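The comparison between the human dissimilarity matrix and the mouse neural distance matrix described above is, in spirit, a representational-similarity analysis. A minimal sketch of how two such dissimilarity structures might be compared (hypothetical data; a rank correlation of the upper triangles, ignoring ties for simplicity; the authors' actual analysis may differ):

```python
import numpy as np

def upper_tri(m):
    """Vectorise the upper triangle (excluding the diagonal) of a square matrix."""
    i, j = np.triu_indices_from(m, k=1)
    return m[i, j]

def rank(v):
    """Simple ranking; ties are ignored for brevity."""
    r = np.empty(len(v))
    r[v.argsort()] = np.arange(len(v))
    return r

def compare_structures(d_a, d_b):
    """Spearman correlation between the upper triangles of two dissimilarity matrices."""
    a, b = rank(upper_tri(d_a)), rank(upper_tri(d_b))
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```

Because only ranks enter the comparison, any monotone rescaling of one matrix (e.g. squaring all distances) leaves the score unchanged, which is why rank-based comparisons are popular for relating structures measured in different units (ratings vs. neural distances).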
Shenyan, O.; Lisi, M.; Greenwood, J. A.; Dekker, T. M.
Hallucinatory experiences, defined as perception in the absence of external stimuli, can occur in both pathological and non-pathological states and can be broadly divided phenomenologically into those of a simple and a complex nature. Non-pathological visual hallucinations can be induced experimentally under a variety of stimulation conditions. To assess whether, despite their differences, these techniques drive a shared underlying hallucinatory mechanism, we compared two methods: flicker and perceptual deprivation (Ganzfeld). Specifically, we measured the frequency and complexity of the hallucinations produced by these techniques. We used button presses, retrospective drawings, interviews, and questionnaires to quantify hallucinatory experience in 20 participants. With both experimental techniques, we found that simple hallucinations were more common than complex hallucinations. We also found that, on average, flicker was more effective than Ganzfeld at eliciting a higher number of hallucinations, though Ganzfeld hallucinations were longer than flicker hallucinations. There was no interaction between experimental condition and hallucination complexity, suggesting that the increased bottom-up visual input in flicker increased simple and complex hallucinations similarly. A correlation was observed between the total proportional time spent hallucinating in flicker and Ganzfeld, which was replicated in a retrospective questionnaire measure of experienced intensity, suggesting a shared hallucinatory mechanism between the two methodologies. We attribute these findings to a shared low-level core hallucinatory mechanism, such as excitability of visual cortex, which is amplified in flicker relative to Ganzfeld due to heightened bottom-up input.
Drori, G.; Bar-Tal, P.; Hirsh, O.; Berlinger, A.; Goldberg, N.; Zvilichovsky, Y.; Hertz, U.; Zaidel, A.; Salomon, R.
Our awareness of dreams, hallucinations, and illusions reflects an intriguing human capacity to recognize potentially false perceptions, known as the Sense of Reality (SoR). Though central to mental health, the study of SoR has been hindered by the subjective nature of hallucinatory experiences. Here we employed a novel virtual reality paradigm simulating virtual hallucinations, mirroring the phenomenology of clinical hallucinations. Combining psychophysics, physiological recordings, and computational modeling in one exploratory (n = 31) and one preregistered experiment (n = 32), we found that SoR varied with virtual hallucination magnitude and domain. SoR judgments were associated with distinct motor, pupillary, and cardiac responses, allowing classification of virtual hallucination exposure. We present a computational model in which SoR judgments arise from comparing current sensory input to an internal model of the world. Our results shed light on the age-old question: how do we know what is real?
Vanbuckhave, C.; Eikner, J. S.; Laeng, B.; Onnis, L.; Mathot, S.
Previous research has shown that the eyes' pupils are larger when imagining dark as compared to bright objects or scenes. Based on this, it has been claimed that pupil size is a sensitive marker of mental-imagery vividness. We investigated this claim in three experiments, conducted in two countries (Norway and The Netherlands; Ntotal = 115), in which participants read, listened to, or freely imagined stories that evoked a sense of darkness or brightness. In addition, participants rated their imagery vividness after each story, as a measure of moment-to-moment fluctuations in imagery, and their imagery vividness in general, as a measure of individual differences in imagery. Overall, we found that darkness-evoking stories induced larger pupils than brightness-evoking stories, although this effect was highly variable and only statistically reliable for longer (> 1 min) audio stories. Importantly, we consistently found that this pupil-size difference (dark - bright) was largest for vividly imagined stories. Finally, we did not find any relationship between this pupil-size difference and individual differences in general imagery vividness. We conclude that the strength of pupil-size changes in response to imagined darkness or brightness reflects moment-to-moment fluctuations in imagery vividness within an individual rather than individual differences in imagery vividness as a personal trait.
Huang, W.; Zhang, F.; Zhang, C.; Wang, C.; Zhang, S.; Pu, Y.; Kong, X.-Z.
Perceptual stimuli's emotional properties are vital for human evolution and adaptation. While visual imagery is predominantly regarded as a weak form of perception, the influence of cross-modal emotional features on imagery is still unknown. The present study investigates how emotional prosody modulates imagery quality (i.e., accuracy and clarity) and its neural mechanisms using a combination of behavioral tasks and functional magnetic resonance imaging (fMRI). At the behavioral level, our results showed that frustrated conditions induced significant prosody effects on visual mental imagery quality measures, and the effects were particularly pronounced in individuals with a lower tendency to use imagery. At the neural level, compared with the neutral condition, the emotional prosody conditions (both happy and frustrated) showed stronger activation in various regions including the middle occipital gyrus, supporting the critical role of the primary visual system in imagery. Moreover, compared to the frustrated prosody condition, the happy prosody condition showed stronger activation in the precuneus and anterior cingulate cortex, core components of the default mode network. A machine learning prediction analysis with a random forest model identified a significant brain-behavior correlation between prosody-linked neural activity and individual imagery use tendency. A subsequent Shapley Additive exPlanations (SHAP) analysis further highlighted the primary visual and default mode regions as top contributors to this prediction. Taken together, these results provide new insights into how emotional prosody modulates visual mental imagery while accounting for individual differences, and provide compelling evidence for incorporating emotion as an important shaping factor in more general models of imagery.
Seppälä, K.; Rantala, N.; Hudson, M.; Putkinen, V. J.; Santavirta, S.; Hyönä, J.; Nummenmaa, L.
BACKGROUND: Fear is a fundamental survival mechanism that enhances the chances of survival both by enabling rapid detection of and adaptive responses to potential threat and by optimizing sensory input and cognitive processing. Here we used a naturalistic design with eye tracking to map the spatiotemporal dynamics of attention and arousal during fearful events with slow and fast temporal dynamics. METHODS: 21 participants watched a full-length horror movie while their eye movements were recorded with an eye tracker. Moment-to-moment intensity of sustained fear as well as the onsets of jump scares (acute fear) were annotated and used to predict gaze parameters (fixation duration and count, blink frequency, saccade amplitude, pupil size, and intersubject synchronization of gaze position). RESULTS: Acute fear events led to shortened fixation durations, suppressed blinking, and increased fixation count, saccade amplitude, and pupil size. Sustained fear was in turn associated with increased pupil size and decreases in blinking and saccade amplitude. These effects remained significant when controlling for luminosity. CONCLUSIONS: During natural vision, both acute and sustained fear cause rapid, state-dependent, and individually varying reconfiguration of visual attention priorities, accompanied by increased affective arousal.
Grove, E.; Hewitt, T.; Seth, A. K.; Macpherson, F.; Schwartzman, D.
Visual hallucinations (VHs) occur across psychedelic states and diverse psychiatric and neurological conditions, yet their phenomenology remains difficult to characterise. Empirical research on VHs is hindered by the lack of large-scale phenomenological datasets, which limits both mechanistic accounts and the systematic characterisation of when and how they arise. Stroboscopic light stimulation (SLS) viewed with closed eyes provides a reliable, non-pharmacological method of inducing VHs in healthy populations. These hallucinations typically consist of vivid colours and dynamic geometric patterns that resemble simple VHs described in both psychedelic and clinical contexts, suggesting partially overlapping neural mechanisms. We developed and applied an unsupervised computer-vision pipeline to analyse a large dataset of 10,598 drawings made following exposure to hallucination-inducing SLS. These drawings were produced by attendees of Dreamachine, a large-scale public installation designed to elicit stroboscopically induced visual hallucinations (SIVHs). We extracted feature embeddings with a self-supervised deep vision transformer, then applied dimensionality reduction and density-based clustering to identify recurrent visual motifs in a data-driven manner. The majority of drawings contained geometric forms, consistent with prior observations of simple VHs under SLS. However, we also identified novel and underreported geometric formations, such as concentric squares, crosses, hyperbolic patterns, and other geometries. Our results show how an unsupervised computer-vision pipeline can organise large, openly shared phenomenological datasets into interpretable classes. By mapping the diversity of simple geometric VHs at scale, this work places new constraints on existing theoretical accounts and motivates targeted experimental work linking SLS parameters, neural dynamics, and geometric visual hallucinations.
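The unsupervised pipeline described above (feature embeddings, then dimensionality reduction, then density-based clustering) can be sketched in miniature. The vision-transformer feature extractor is out of scope here; the sketch below assumes embeddings are already available as rows of a matrix, reduces them with PCA via SVD, and groups them with a minimal DBSCAN-style clustering. All parameters (`k`, `eps`, `min_pts`) are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto their top-k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def dbscan(X, eps, min_pts):
    """Minimal DBSCAN: returns per-point labels; -1 marks noise, else cluster index."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    neighbours = [np.flatnonzero(D[i] <= eps) for i in range(n)]
    labels, visited, cluster = [-1] * n, [False] * n, 0
    for i in range(n):
        if visited[i] or len(neighbours[i]) < min_pts:
            visited[i] = True  # non-core or already-handled points are skipped here
            continue
        visited[i] = True
        labels[i] = cluster
        queue = list(neighbours[i])
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border or core point joins the cluster
            if not visited[j]:
                visited[j] = True
                if len(neighbours[j]) >= min_pts:  # core point: keep expanding
                    queue.extend(neighbours[j])
        cluster += 1
    return labels
```

The cluster count falls out of the data rather than being fixed in advance, which is why density-based methods suit motif discovery in large drawing datasets; in practice a library implementation (e.g. scikit-learn's) would replace this toy version.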
Sulfaro, A. A.; Robinson, A. A.; Carlson, T. A.
Evidence suggests that mental imagery and veridical perception recruit similar components of the human visual system. If so, neural representations of imagined and real stimuli should interact with one another, combining constructively or competing antagonistically. To determine if and how real and imagined visual stimuli interact in the brain, we asked participants to mentally visualise white bars at specific orientations after a rhythmic countdown while their brain activity was recorded using electroencephalography. Stimuli were imagined in isolation, or while another stimulus at a highly or poorly congruent orientation appeared on-screen. Multivariate pattern analysis was used to assess whether overlap between imagined and real stimulus features enhanced or diminished stimulus-specific sensory information in the brain. Findings showed that imagined and real orientation could be decoded from brain activity, with real orientation decoding mildly amplified by highly congruent, but not poorly congruent, imagined orientations. Although interactions between real and imagined stimuli were observed, no evidence was detected to suggest that imagined and real stimuli use the same neural activity patterns to encode sensory information. Instead, congruent imagery seemed only to amplify activity which had already been induced by real percepts, targeting late- but not early-stage perceptual representations. Ultimately, this study suggests that imagined and real stimuli interact in a mildly constructive manner, with imagination mostly acting in a modulatory capacity.
Wilson, M.; Hecker, L.; Joos, E.; Aertsen, A.; Tebartz van Elst, L.; Kornmeier, J.
During observation of the ambiguous Necker cube, our perception suddenly reverses between two roughly equally likely 3D interpretations. During passive observation, perceptual reversals seem to be sudden and spontaneous. A number of theoretical approaches postulate destabilization of neural representations as a precondition for spontaneous reversals of ambiguous figures. In the current study, we focused on possible EEG correlates of perceptual destabilization that may allow prediction of an upcoming perceptual reversal. We presented ambiguous Necker cube stimuli in an onset paradigm and investigated the neural processes underlying endogenous reversals as compared to perceptual stability across two consecutive stimulus presentations. In a separate experimental condition, disambiguated cube variants were alternated randomly to exogenously induce perceptual reversals. We compared the EEG immediately before and during endogenous Necker cube reversals with corresponding time windows during exogenously induced perceptual reversals of disambiguated cube variants. For the ambiguous Necker cube stimuli, we found the earliest differences in the EEG between reversal trials and stability trials already one second before a reversal occurred, at bilateral parietal electrodes. The traces remained similar until approximately 1100 ms before a perceived reversal, became maximally different at around 890 ms (p = 7.59 x 10^-6, Cohen's d = 1.35), and remained different until shortly before offset of the stimulus preceding the reversal. No such patterns were found for the disambiguated cube variants. The identified EEG effects may reflect destabilized states of neural representations, related to destabilized perceptual states preceding a perceptual reversal. They further indicate that spontaneous Necker cube reversals are most probably not as spontaneous as generally thought. Rather, the destabilization may occur over a longer time scale, at least one second before a reversal event.
Suzuki, K.; Seth, A. K.; Schwartzman, D. J.
Visual hallucinations (VHs) are perceptions of objects or events in the absence of the sensory stimulation that would normally support such perceptions. Although all VHs share this core characteristic, there are substantial phenomenological differences between VHs that have different aetiologies, such as those arising from neurological conditions, visual loss, or psychedelic compounds. Here, we examine the potential mechanistic basis of these differences by leveraging recent advances in visualising the learned representations of a coupled classifier and generative deep neural network - an approach we call computational (neuro)phenomenology. Examining three aetiologically distinct populations in which VHs occur - neurological conditions (Parkinson's Disease and Lewy Body Dementia), visual loss (Charles Bonnet Syndrome, CBS), and psychedelics - we identify three dimensions relevant to distinguishing these classes of VHs: realism (veridicality), dependence on sensory input (spontaneity), and complexity. By selectively tuning the parameters of the visualisation algorithm to reflect influence along each of these phenomenological dimensions, we were able to generate synthetic VHs that were characteristic of the VHs experienced by each aetiology. We verified the validity of this approach experimentally in two studies that examined the phenomenology of VHs in neurological and CBS patients, and in people with recent psychedelic experience. These studies confirmed the existence of phenomenological differences across these three dimensions between groups, and crucially, found that the appropriate synthetic VHs were representative of each group's hallucinatory phenomenology. Together, our findings highlight the phenomenological diversity of VHs associated with distinct causal factors and demonstrate how a neural network model of visual phenomenology can successfully capture the distinctive visual characteristics of hallucinatory experience.
Werner, A. M.; Schmidt, A. M.; Hilmers, J.; Boborzi, L.; Weigold, M.
In this study we reproduce the dress-ambiguity phenomenon in a real scene and report quantitative measurements of the corresponding colour perceptions. The original, real dress, known from the #thedress illusion, was illuminated by combined short- and long-wavelength broadband lights from two slide projectors. Test subjects viewing the dress reported perceiving the dress fabric and lace colours as blue & black, white & gold, or light blue & brown; their corresponding perceptual matches were distributed along the blue/yellow cardinal axis and exhibited a variability comparable to the ambiguity of the dress photograph. It is particularly noteworthy that the colour ambiguity emerged despite the observers' explicit knowledge of the direction of the light source. Manipulating the background of the real dress (change in chromaticity and luminance, or masking) revealed significant differences between the perceptual groups regarding lightness and colour of the dress. Our findings suggest that observer-specific differences in the perceptual organisation of the visual scene are responsible for the colour ambiguity observed for the real dress; in particular, we conclude that the colour computations of white & gold viewers focused on the local region of the dress, whereas the colour processes of blue & black and light-blue & brown viewers were strongly influenced by contextual computations including the background. Our segmentation hypothesis extends existing explanations for the dress ambiguity and proposes image-based (in the case of the real scene) and high-level (in the case of the photograph) neural processes which control the spatial reach of contextual colour computations. The relation between the ambiguity in our real scene and the dress photograph is discussed.
Boly, M.; Smith, R.; Vigueras Borrego, G.; Pozuelos, J. P.; Allaudin, T.; Malinowski, P.; Tononi, G.
Pure presence (PP) is described in several meditative traditions as an experience of a vast, vivid space devoid of perceptual objects, thoughts, and self. Integrated information theory (IIT) predicts that such vivid experiences may occur when the cortical substrate of consciousness is virtually silent. To test this, we analyzed high-quality 256-electrode high-density electroencephalography (hdEEG) from twenty-two long-term Vajrayana and Zen meditators who reported reaching PP during a week-long retreat. Because neural activity typically increases gamma power, we predicted PP would show widespread gamma reductions. Across both traditions, PP was associated with a broadband power decrease compared to within-meditation mind-wandering, most consistent in the gamma range (30-45 Hz). Source reconstruction revealed widespread gamma decreases, strongest in posteromedial cortex. PP gamma power was lower than in all other control states, including watching or imagining a movie, active thinking, and open monitoring. PP delta power (1-4 Hz) was also markedly reduced compared to dreamless sleep. Meditative states resembling PP, with minimal perceptual contents or accompanied by bliss, showed similar signatures. Overall, PP appears to be a state of vivid consciousness during which the cortex is highly awake (low delta) yet widely quiescent (low gamma), in line with IIT's predictions.
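Band-limited power comparisons of the kind reported above reduce, at their simplest, to integrating a power spectrum over a frequency band (e.g. gamma, 30-45 Hz; delta, 1-4 Hz). A toy illustration on synthetic signals, using a plain periodogram via `numpy.fft`; real hdEEG analyses would use multitaper or Welch estimates on artefact-cleaned, source-reconstructed data:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Summed periodogram power of signal x (sampling rate fs, in Hz) in [lo, hi) Hz."""
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return psd[(freqs >= lo) & (freqs < hi)].sum()

fs = 256
t = np.arange(0, 2, 1 / fs)
gamma_like = np.sin(2 * np.pi * 40 * t)  # 40 Hz component: inside the 30-45 Hz band
alpha_like = np.sin(2 * np.pi * 10 * t)  # 10 Hz component: outside that band
```

Contrasting `band_power(x, fs, 30, 45)` between conditions (PP vs. mind-wandering, per source or electrode) is the elementary operation behind the gamma-reduction result; the sketch omits epoching, baseline handling, and statistics.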
Bandara, K. H.; Rowe, E. G.; Garrido, M. I.
The role of the prefrontal cortex (PFC) in consciousness is hotly debated. Frontal theories argue that the PFC is necessary for consciousness, while sensory theories propose that consciousness arises from recurrent activity in the posterior cortex alone, with activity in the PFC resulting from the mere act of reporting. To resolve this dispute, we re-analysed an EEG dataset of 30 participants from a no-report inattentional blindness paradigm in which faces are (un)consciously perceived. Dynamic causal modelling was used to estimate the effective connectivity between the key contended brain regions, the prefrontal and the posterior cortices. Then, a second-level parametric empirical Bayesian model was used to determine how connectivity was modulated by awareness and task-relevance. While an initial data-driven search could corroborate neither sensory nor frontal theories of consciousness, a more directed hypothesis-driven analysis revealed strong evidence that both theories could explain the data, with a very slight preference for frontal theories. Specifically, a model with backward connections switched off within the posterior cortex explained awareness better (53%) than a model without backward connections from the PFC to sensory regions. Our findings provide some support for a subtle, yet crucial, contribution of the frontal cortex to consciousness, and highlight the need to revise current theories of consciousness.
Chandia-Jorquera, A.; van Mil, S. D.; Estarellas, M.; Dauphin, M.; Pascovich, C.; Canales-Johnson, A.
In contemplative traditions, pure awareness (PA) is described as a wakeful state largely devoid of cognitive content, often considered a case of minimal phenomenal experience (MPE). Transcendental Meditation (TM) provides an excellent empirical model for PA because it is standardized, effortless, and reliably induces reports of awareness with minimal content. We combined electroencephalography (EEG) with phenomenological reports to study PA in 33 experienced TM practitioners and matched controls (performing mental counting). TM practitioners reported greater intensity and variability of PA than controls, regardless of years of meditation practice, consistent with the notion of automatic self-transcendence. Using multivariate classification analyses of theoretically motivated EEG markers, we revealed a double dissociation in neural signatures. When contrasting TM with controls, temporal entropy and aperiodic dynamics were the strongest discriminators, while functional connectivity based on phase coherence contributed the least. In contrast, when TM was compared with its own baseline, low-frequency functional connectivity dominated, while temporal entropy contributed minimally to classification. This dissociation suggests that PA is characterized by a distinct EEG pattern of signal entropy and aperiodic neural dynamics compared to ordinary cognition, yet stabilized by slow-frequency oscillatory synchronization relative to rest. Finally, TM showed little evidence of carryover effects from meditation into subsequent rest, whereas controls showed residual changes induced by counting. Together, these results provide the most systematic electrophysiological characterization of PA to date and establish neurophenomenology as a robust framework for advancing the neuroscience of minimal phenomenal experience.
Davidson, M. J.; Graafsma, I.; Tsuchiya, N.; van Boxtel, J. J. A.
Perceptual filling-in (PFI) occurs when a physically-present visual target disappears from conscious perception, with its location filled in by the surrounding visual background. Compared to other visual illusions, these perceptual changes are crisp and simple, and can occur for multiple spatially-separated targets simultaneously. Contrasting neural activity during the presence or absence of PFI may complement other multistable phenomena to reveal the neural correlates of consciousness (NCC). We presented four peripheral targets over a background dynamically updating at 20 Hz. While participants reported on target disappearances/reappearances via button press/release, we tracked neural activity entrained by the background during PFI using steady-state visually evoked potentials (SSVEPs) recorded in the electroencephalogram. We found background SSVEPs closely correlated with subjective report, and increased with an increasing amount of PFI. Unexpectedly, we found that as the number of filled-in targets increased, the duration of target disappearances also increased, suggesting facilitatory interactions exist between targets in separate visual quadrants. We also found distinct spatiotemporal correlates for the background SSVEP harmonics. Prior to genuine PFI, the response at the second harmonic (40 Hz) increased before the first (20 Hz), which we tentatively link to an attentional effect. There was no difference between harmonics for physically removed stimuli. 
These results demonstrate that PFI can be used to study multi-object perceptual suppression when frequency-tagging the background of a visual display, and, because there are distinct neural correlates for endogenously and exogenously induced changes in consciousness, that it is ideally suited to study the NCC.

Highlights
- Perceptual filling-in (PFI) has distinct advantages for investigating the neural correlates of consciousness.
- Participants can accurately report graded changes in consciousness using four simultaneous buttons.
- Frequency-tagging of visual background information tracks changes in visual perception.
- Spatiotemporal EEG responses differentiate PFI from phenomenally matched physical disappearances.
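Frequency-tagging analyses of this kind quantify the SSVEP as spectral amplitude at the tagging frequency (here 20 Hz) and its harmonics (40 Hz), typically expressed relative to neighbouring frequency bins. A minimal sketch on a synthetic tagged signal; the signal-to-noise definition (tag-bin amplitude over the mean of adjacent bins) and all parameters are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def ssvep_snr(x, fs, f_target, n_side=5):
    """Amplitude at the bin nearest f_target, divided by the mean amplitude
    of n_side neighbouring bins on each side."""
    amp = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    k = int(np.argmin(np.abs(freqs - f_target)))
    side = np.r_[amp[k - n_side:k], amp[k + 1:k + 1 + n_side]]
    return amp[k] / side.mean()

rng = np.random.default_rng(1)
fs, dur = 250, 4
t = np.arange(0, dur, 1 / fs)
# a background flickering at 20 Hz entrains a 20 Hz response plus a 40 Hz harmonic
eeg = (np.sin(2 * np.pi * 20 * t)
       + 0.3 * np.sin(2 * np.pi * 40 * t)
       + 0.5 * rng.normal(size=t.size))
```

Tracking `ssvep_snr` over time windows during reported disappearances vs. visibility is the basic move behind relating the background SSVEP (and its first vs. second harmonic) to PFI.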
Nara, S.; Becker, L.; Hillebrand, L.; Xiang, R.; Kaiser, D.
Understanding the neural correlates of aesthetic experiences in natural environments is a central question in neuroaesthetics. A previous EEG study (Kaiser, 2022) identified early and temporally sustained neural representations of visual scene beauty. These results were obtained with long presentation durations (1,450 ms) and with explicit beauty judgments, rendering it unclear how presentation time and task demands shape the neural correlates of scene beauty. In two EEG experiments, we replicated this study while varying presentation time and task. Experiment 1 tested whether reducing stimulus presentation time from 1,450 ms to 100 ms altered neural representations of beauty. Experiment 2 examined whether beauty-related representations prevailed when participants performed an orthogonal task instead of explicitly judging beauty. Representational similarity analysis revealed that beauty-related neural representations emerged early (within 150-200 ms post-stimulus) and were sustained over time, in line with previous findings. Critically, we found that neither reduced presentation time nor the absence of an explicit beauty judgment significantly altered beauty-related neural dynamics. These results suggest that the neural correlates of scene beauty are relatively robust to stimulus presentation and task regimes, providing a potential correlate of the spontaneous perception of beauty in natural environments.
Vanbuckhave, C.; Ganis, G.
Previous studies suggest that visual mental imagery (VMI) acts as a weaker form of top-down visual perception (VP), with the two becoming more similar as VMI vividness increases. However, this relationship remains ill-defined, and it is unclear precisely how much weaker VMI is relative to VP. Here, we introduce an original probabilistic deep learning approach to quantify vividness at the neural level. Thirty-four participants either imagined or perceived stimuli presented at varying levels of vividness and provided trial-by-trial, picture-based vividness ratings. EEG activity recorded during VP was used to train a convolutional neural network (EEGNet) to predict perceived vividness from eight posterior electrodes located around early visual areas. A leave-one-subject-out cross-validation procedure showed that the model generalised across participants with above-chance accuracy during VP. On VP trials, predictions tracked vividness labels, with reliable interpolation to new vividness labels not included during training. Applied to VMI trials, mean expected VMI vividness remained substantially lower than expected vividness for seen stimuli but slightly higher than baseline, supporting barely depictive rather than quasi-depictive imagery. For 91% of participants, mean expected VMI vividness was also lower than, yet scaled with, mean reported VMI vividness. This framework provides a principled way to quantify and compare VMI and VP on a shared neural-behavioural scale, with implications for studying individual differences and aphantasia.
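Leave-one-subject-out cross-validation, as used above to test whether the EEGNet decoder generalises across participants, has a simple generic shape: hold out all trials from one subject, train on the rest, and average held-out accuracy across subjects. The sketch below substitutes a toy nearest-class-mean classifier for EEGNet (illustrative data and classifier, not the authors' model):

```python
import numpy as np

def fit_nearest_mean(X, y):
    """Store one mean feature vector per class."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict_nearest_mean(model, X):
    """Assign each row of X to the class with the nearest mean."""
    classes, means = model
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]

def loso_accuracy(X, y, subjects):
    """Train on all subjects but one, test on the held-out subject, average."""
    accs = []
    for s in np.unique(subjects):
        held = subjects == s
        model = fit_nearest_mean(X[~held], y[~held])
        accs.append(np.mean(predict_nearest_mean(model, X[held]) == y[held]))
    return float(np.mean(accs))
```

Holding out whole subjects rather than random trials is what licenses the claim that the decoder captures subject-general neural structure instead of memorising subject-specific idiosyncrasies.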
Duay, K.; Nagai, T.
Realism in augmented reality (AR) hinges on the seamless blending of virtual elements into real-world environments. One possible factor influencing this realism may be the physical gamut: an internal representation of all perceivable colors within a natural scene. Previous studies on luminosity thresholds suggest that this gamut, rooted in optimal colors theory, constrains perceptual judgments. While promising, such findings were based on abstract and two-dimensional stimuli only. Before extending this framework to more realistic AR scenarios, an essential next step is to assess whether the physical gamut theory also applies to naturalistic stimuli. This study addresses that gap. Our results reveal that the physical gamut remains a valid construct for natural objects viewed in realistic scenes. Moreover, observers' judgments of luminosity thresholds appear guided not only by a criterion of self-luminosity, but also by an implicit sense of naturalness. These insights pave the way for exploring AR realism through the lens of physical gamut theory.
Malipeddi, S.; Sasidharan, A.; Venugopal, R.; Ventura, B.; Bauer, C. C.; P.N., R.; Mehrotra, S.; John, J. P.; Kutty, B. M.; Northoff, G.
A balanced mind, or equanimity, cultivated through meditation and other spiritual practices, is considered one of the highest mental states. Its core features include deidentification and non-duality. Despite its significance, its neural correlates remain unknown. To address this, we acquired 128-channel EEG data (n = 103) from advanced and novice meditators (from the Isha Yoga tradition) and controls during an internal attention task (breath-watching) and an external attention task (visual-oddball paradigm). We calculated the auto-correlation window (ACW), a measure of the brain's intrinsic neural timescales (INTs), and assessed equanimity through self-report questionnaires. Advanced meditators showed higher levels of equanimity and shorter INTs (shorter ACW) during breath-watching, indicating deidentification from mental contents. Furthermore, they demonstrated no significant differences in INTs between tasks, indicating non-dual awareness. Finally, shorter INTs correlated with participants' subjective perceptions of equanimity. In conclusion, we show that the shorter duration of the brain's INTs may serve as a neural marker of equanimity.
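The auto-correlation window used above is commonly operationalised as ACW-50: the first lag at which a signal's autocorrelation function falls below 0.5, so that fast, quickly decorrelating signals yield short windows and slowly varying signals yield long ones. A minimal sketch with synthetic signals standing in for EEG (the ACW-50 definition is a standard convention; preprocessing and windowing choices in the actual study may differ):

```python
import numpy as np

def acw50(x, fs):
    """First lag (in seconds) at which the autocorrelation of x drops below 0.5."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags only
    acf /= acf[0]                                       # normalise so acf[0] == 1
    below = np.flatnonzero(acf < 0.5)
    return below[0] / fs if below.size else len(x) / fs

fs = 100
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
fast = rng.normal(size=t.size)       # white noise: decorrelates almost immediately
slow = np.sin(2 * np.pi * 1.0 * t)   # slow 1 Hz oscillation: long correlation window
```

On this toy contrast, the white-noise signal yields a much shorter ACW-50 than the slow oscillation, which is the direction of the meditator-versus-task effects described in the abstract.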